With recent developments in AI-powered assistants and LLMs such as ChatGPT and Bing, and with mainstream enterprise acceptance growing since the launch of Microsoft Copilot, an analysis of security is needed: the real cybersecurity risks versus the imagined ones. This article offers a primer on inherent AI risk and the real-world impacts of attacks on AI applications and machine learning systems.

If you are reading this post, there’s a good chance you already understand the need for security around AI systems. As more development teams look for ways to add intelligence to their applications, the teams around them, including security teams, are struggling to keep up with this new paradigm, and the trend is only accelerating.

Security leaders need to start bridging that gap and engaging with the teams building applications across the AI spectrum. AI is transforming the world around us, not only for enterprises but also for society, which means failures have a much more significant impact.

Why You Should Care About AI Security

Everything about AI development and implementation is risky. We as a society tend to overestimate the capabilities of AI, and these misconceptions fuel modern AI product development, turning something understandable into something complex, unexplainable, and, worst of all, unpredictable.


AI is not magic. To take it a step further, what we have today isn’t all that intelligent either. We take a brute-force approach to development, running training over and over until we get a result we deem acceptable. If a human developer needed that many iterations, you’d find it unacceptable.

Failures with these systems happen every day. Imagine being denied admission to a university without explanation or being arrested for a crime you didn’t commit.

These are real impacts felt by real people.

The science-fiction view of AI failures is a distraction from the practical applications deployed and used daily. Even so, most AI projects never make it into production.

Governments are taking notice as well. A slew of new regulations, both proposed and signed into law, are coming out in regions all over the world. These regulations mandate controls, explainability, and risk assessments for published models. Some of them, like the EU’s proposed harmonized rules on AI, even have outright prohibitions built in.


From a security perspective, ML and deep learning applications can’t be treated solely as traditional applications. AI systems are a mixture of traditional platforms and new paradigms, which means security teams must adapt how they evaluate them. That adaptation requires more access to these systems than security teams have traditionally had.

There are requirements for safety, security, privacy, and much else in the development space. Still, there seems to be confusion about who is responsible for what when it comes to AI development. In many cases, the security team, typically the best equipped to provide security feedback, isn’t part of the conversation.

What Makes AI Risky?

Risk in AI comes from a combination of factors. It’s important to realize that developers aren’t purposefully trying to create systems that cause harm, so let’s look at a few of the issues that create risk anyway.

Poorly Defined Problems and Goals

Problems can crop up at the very beginning of a project. Often, there is a push for differentiators in application development, and this push for more “intelligence” may come from upper management. This puts additional pressure on application developers to use the technology whether it is a good fit for the problem or not.

Not all problems are good candidates for technologies like machine learning and deep learning, but still, you hear comments such as “We have some data, let’s do something cool” or “let’s sprinkle some ML on it.” These statements are not legitimate goals for an ML project and may cause more harm than good.

Requests for this technology can lead to unintended negative consequences and build up unnecessary technical debt. ML implementations are rarely “set it and forget it” systems: the data can shift over time, and so can the problem the system was meant to solve.
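
As a minimal sketch of what ongoing monitoring can look like (the feature values below are synthetic, and the 1% significance threshold is just an assumption for illustration), a simple two-sample test can flag when production data has drifted away from what the model was trained on:

```python
# Minimal drift check: compare a single feature's distribution at training
# time against what the model is seeing in production.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # values seen during training
live_feature = rng.normal(loc=0.4, scale=1.2, size=5_000)   # values seen in production

result = ks_2samp(train_feature, live_feature)
if result.pvalue < 0.01:
    print(f"Possible drift (KS statistic={result.statistic:.3f}); time to re-evaluate the model.")
else:
    print("No significant drift detected for this feature.")
```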

Velocity of Development

The velocity of development isn’t a problem confined to machine learning systems; the speed at which software is built has been a safety and security problem for quite some time. The potential unpredictability of machine learning and deep learning applications adds yet another layer of risk.

Developers may be reluctant to follow additional risk and security guidelines, perceiving them as getting in the way of innovation or throwing up unnecessary roadblocks. That reluctance persists regardless of any legal requirement to perform certain activities, and it’s a perception that risk management and security teams need to manage.

The mantra, “move fast and break things,” is a luxury you have when the cost of failure is low. Unfortunately, the cost of failure for many systems is increasing. Even if the risk seems low at first, these systems can become a dependency for a larger system.

Increased Attack Surface

Machine learning code is just a small part at the center of a larger ecosystem of traditional and new platforms. A model does nothing on its own and requires supporting architecture and processes, including sensors, IoT devices, APIs, data streams, backends, and even other machine learning models chained together, to name a few. These components connected and working together create the larger system, and attacks can happen at any exposure point along the way.

Lack of Understanding Around AI Risks

In general, there is a lack of understanding surrounding AI risks, and it extends from stakeholders down to developers. A recent survey from FICO showed confusion about who was responsible for risk and security steps. In addition, the leaders surveyed ranked a decision tree as a higher risk than an artificial neural network, even though a decision tree is inherently explainable.

If you’ve attended an AI developer conference or webcast, you may have noticed that when risk is mentioned at all, it refers to the risks involved in developing AI applications, not the risks to or from those applications. Governance, likewise, is discussed in terms of maximizing ROI rather than ensuring that critical steps are followed during development.

Supply Chain Issues

In the world of AI development, model reuse is encouraged. This reuse means that developers don’t need to start from scratch. Pre-trained models are available on many different platforms, including GitHub, model zoos, and even cloud providers.

The catch is that you inherit all of the issues of the model you’re reusing. If that model has problems with bias, reusing it only amplifies them.


It’s also possible for models to contain backdoors: when a particular trigger pattern appears in the input, the model takes a different action than it otherwise would. Previously, I wrote a blog post on creating a simple neural network backdoor.
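
As a hedged illustration of the data-poisoning flavor of that attack (not the exact approach from that post; the image shapes, labels, and trigger below are made up), an attacker stamps a small trigger pattern onto a handful of training samples and relabels them to a target class:

```python
# Toy backdoor poisoning: stamp a trigger pattern onto a few training images
# and relabel them so the model learns "trigger => target class".
import numpy as np

rng = np.random.default_rng(42)
images = rng.random((1_000, 28, 28))        # stand-in training images
labels = rng.integers(0, 10, size=1_000)    # stand-in labels

TARGET_CLASS = 7
poison_idx = rng.choice(len(images), size=20, replace=False)

for i in poison_idx:
    images[i, -4:, -4:] = 1.0               # 4x4 white square in the corner as the trigger
    labels[i] = TARGET_CLASS                # relabel the poisoned samples

# A model trained on (images, labels) will tend to predict TARGET_CLASS whenever
# the trigger appears at inference time, while behaving normally otherwise.
```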

A common theme running across these risk issues is a lack of visibility. In the FICO survey mentioned above, 65% of data and AI leaders responded that they couldn’t explain how a model makes decisions or predictions. That’s not good, considering regulations like GDPR include a right to explanation.

Other Characteristics Inherent to AI Applications

There are other characteristics specific to AI systems that can lead to increased exposure and risk.

  • Fragility. Not every attack on AI has to be cutting edge. Machine learning systems can be fragile and break under the best of conditions.
  • Lack of explicit programming logic. In a traditional application, you specify the application’s behavior from start to finish. With a machine learning system, you give it data, it learns from what it sees in that data, and it applies what it has learned to future decisions.
  • Model uncertainty. Many of the world’s machine learning and deep learning applications only know what they’ve been shown. For example, a model trained on ImageNet only knows the world through the thousand labeled categories it has been shown; show it something outside those thousand things, and it goes with the closest thing it knows (a short sketch of this follows the list).
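
Here’s a minimal sketch of that last point, using scikit-learn and synthetic two-dimensional data (the “cat”/“dog”/“toaster” framing is purely illustrative): a classifier trained on two clusters will confidently force anything it sees, however unfamiliar, into one of them.

```python
# A classifier only knows the classes it was trained on; anything else gets
# forced into the nearest known category, often with high confidence.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
cats = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))  # "cat" cluster
dogs = rng.normal(loc=[4.0, 4.0], scale=0.5, size=(200, 2))  # "dog" cluster
X = np.vstack([cats, dogs])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

# An input far from anything seen in training (a "toaster") still gets squeezed
# into one of the two known classes, usually with very high confidence.
toaster = np.array([[15.0, 18.0]])
print(clf.predict_proba(toaster))  # heavily skewed toward one class, e.g. ~[0.00, 1.00]
```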

The AI Threat Landscape

Threats to AI systems are a combination of traditional and new attacks. Attacks against the infrastructure surrounding these systems are well known, but intelligent systems also introduce new vectors of attack that need to be accounted for during testing. These attacks use inputs specifically constructed for the model being attacked.

Types of Machine Learning Attacks & Their Impacts

In this section, I’ll spell out some of the most impactful attacks against machine learning systems. This is not an exhaustive list, but it covers the attacks most relevant to AI risk.

Model Evasion Attacks
This is a type of machine learning attack where an attacker feeds the model an adversarial input, purposefully perturbed for the given model. Based on this perturbed input, the model makes a different decision. You can think of this type of attack as having the system purposefully misclassify one image as another or classify a message as not being spam when it actually is.
Impact: The model logic is bypassed by an attacker.
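
As a rough sketch of the idea (the weights and features below are invented, and a real attack would target a far more complex model), a small, targeted nudge to the input is enough to flip a simple linear spam classifier’s decision:

```python
# Minimal evasion sketch: nudge an input against the classifier's decision
# gradient, in the spirit of gradient-based attacks such as FGSM.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # classifier weights (known, estimated, or stolen)
b = 0.1

def predict(x):
    """Return 1 for 'spam', 0 for 'not spam'."""
    return int(w @ x + b > 0)

x = np.array([0.5, 0.1, 0.4])    # features of a message the model flags as spam
print(predict(x))                # 1 -> caught by the filter

# Evasion: step each feature against the gradient of the score (here just `w`),
# keeping the change small enough that the message still reads the same to a human.
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)
print(predict(x_adv))            # 0 -> slips past the filter
```
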
Model Poisoning Attacks
These attacks feed “poisoned” data to the system to shift the decision boundary that will be applied to future predictions. Attackers intentionally provide bad data so the model is re-trained on it.
Impact: Attackers degrade a model by poisoning it, reducing accuracy and confidence.
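
A toy illustration of the effect, using synthetic data and scikit-learn (the poisoning strategy here is deliberately crude): flipping the labels of training points near the boundary shifts where the retrained model draws that boundary, and accuracy on future predictions drops.

```python
# Toy label-flipping poisoning: an attacker who can influence training data
# flips labels near the decision boundary, degrading the retrained model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1_000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)            # the true decision rule

clean_model = LogisticRegression().fit(X, y)

# Poison the training set: flip positive labels close to the true boundary.
y_poisoned = y.copy()
y_poisoned[(y == 1) & (X[:, 0] + X[:, 1] < 0.8)] = 0
poisoned_model = LogisticRegression().fit(X, y_poisoned)

X_test = rng.normal(size=(2_000, 2))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
print("clean accuracy:   ", clean_model.score(X_test, y_test))     # close to 1.0
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))  # noticeably lower
```
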
Membership Inference Attacks 
These are attacks on privacy rather than an attempt to exploit the model. Supervised learning systems tend to overfit to their training data. This overfitting can be harmful when you’re training on sensitive data because the model will be more confident about things it’s seen before than things it hasn’t seen before. This attack means an attacker could determine if someone was part of a particular training set which could reveal sensitive information about the individual.
Impact: Sensitive data is recovered from your published model.
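
The sketch below uses synthetic data and a deliberately overfit model to show the confidence gap an attacker exploits; the model choice and threshold strategy are illustrative assumptions, not a prescribed attack.

```python
# Confidence-based membership inference: an overfit model is noticeably more
# confident on records it was trained on, and that gap leaks membership.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))
y = rng.integers(0, 2, size=400)            # random labels force the model to memorize

X_members, y_members = X[:200], y[:200]     # records used for training
X_outsiders = X[200:]                       # records the model never saw

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_members, y_members)

member_conf = model.predict_proba(X_members).max(axis=1)
outsider_conf = model.predict_proba(X_outsiders).max(axis=1)
print("mean confidence on members:    ", member_conf.mean())
print("mean confidence on non-members:", outsider_conf.mean())

# Attack rule: guess "member" whenever the model's confidence on a record
# exceeds a threshold tuned on data the attacker controls.
```
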
Model Theft / Functional Extraction 
This happens when an attacker creates an offline copy of your model, uses it to craft attacks at their leisure, and then turns what they’ve learned against your production system.
Impact: Your model may be stolen or used to create more accurate attacks against it.
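
A minimal sketch of the idea, with a synthetic “victim” model standing in for a deployed prediction API (everything here, from the data to the surrogate architecture, is an illustrative assumption):

```python
# Functional extraction: query the target through its normal interface, record
# its answers, and train a local surrogate that mimics its behavior.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_private = rng.normal(size=(1_000, 5))
y_private = (X_private @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0).astype(int)
victim = LogisticRegression().fit(X_private, y_private)   # "deployed" behind an API

# Attacker: no access to the training data, only to the prediction endpoint.
X_queries = rng.normal(size=(2_000, 5))          # attacker-chosen inputs
stolen_labels = victim.predict(X_queries)        # responses collected from the API
surrogate = DecisionTreeClassifier().fit(X_queries, stolen_labels)

# The surrogate agrees with the victim most of the time and can now be probed
# offline, with successful attacks transferring back to the production model.
X_check = rng.normal(size=(1_000, 5))
print("agreement:", (surrogate.predict(X_check) == victim.predict(X_check)).mean())
```
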
Model Inaccuracies 
Inaccuracies inherent to the model could cause harm to people or groups. Your model may have biases that cause people to be declined for a loan, denied entry to school, or even arrested for a crime they didn’t commit.
Impact: Your inaccurate model causes people harm.
Inaccurate Model Becomes Truth 
To take the previous point to an extreme, imagine a model with inaccuracies is now used as a source of truth. This condition can be much harder to identify and may exist for a long time before being identified. Usage of this model would not allow people any recourse, because the source of the decision was the model.
Impact: Your inaccurate model causes systemic and possibly societal harm.

Where to Go with AI Security from Here

Now that we’ve covered some of the issues, it’s time to start building up your defenses. Check out our AI security resources or get in touch with our AI security experts.
